Learning the Kernel Function via Regularization
Authors
Abstract
We study the problem of finding an optimal kernel from a prescribed convex set of kernels K for learning a real-valued function by regularization. We establish for a wide variety of regularization functionals that this leads to a convex optimization problem and, for square loss regularization, we characterize the solution of this problem. We show that, although K may be an uncountable set, the optimal kernel is always obtained as a convex combination of at most m+2 basic kernels, where m is the number of data examples. In particular, our results apply to learning the optimal radial kernel or the optimal dot product kernel.
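To make the square-loss case concrete, here is a minimal numerical sketch (this is not the algorithm from the paper; the names gaussian_kernel and learn_kernel_weights, the candidate kernel widths, the regularization parameter lam, and the step size are illustrative assumptions). It learns a convex combination of candidate radial (Gaussian) kernels by minimizing, over the probability simplex, the regularized square-loss value Q(mu) = lam * y^T (K(mu) + lam*I)^{-1} y with K(mu) = sum_j mu_j K_j; scaling conventions for this functional vary slightly across the literature. Since y^T A^{-1} y is convex in A and K(mu) is affine in mu, this is a convex problem, so projected gradient descent suffices.

```python
import numpy as np

def gaussian_kernel(X, sigma):
    """Gram matrix of a Gaussian (radial) kernel with width sigma."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def learn_kernel_weights(kernels, y, lam=0.1, steps=500, lr=0.05):
    """Minimize Q(mu) = lam * y^T (K(mu) + lam*I)^{-1} y over the simplex,
    where K(mu) = sum_j mu_j * K_j, by projected gradient descent."""
    m = len(y)
    mu = np.full(len(kernels), 1.0 / len(kernels))  # start at the uniform combination
    for _ in range(steps):
        K = sum(w * Kj for w, Kj in zip(mu, kernels))
        c = np.linalg.solve(K + lam * np.eye(m), y)            # c = (K(mu) + lam*I)^{-1} y
        grad = np.array([-lam * c @ Kj @ c for Kj in kernels])  # dQ/dmu_j
        mu = project_to_simplex(mu - lr * grad)
    return mu

# Toy usage: choose among a few candidate radial kernels of different widths.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
sigmas = [0.1, 0.5, 1.0, 2.0, 5.0]
kernels = [gaussian_kernel(X, s) for s in sigmas]
mu = learn_kernel_weights(kernels, y)
print(dict(zip(sigmas, np.round(mu, 3))))
```

Note that the paper's result concerns a possibly uncountable family K (for example, all radial kernels), for which an optimal kernel is still a convex combination of at most m+2 basic kernels; the finite candidate set above only illustrates the weighted-combination form, not that stronger statement.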
Similar Papers
Learning with Regularization Networks
In this work we study and develop learning algorithms for networks based on regularization theory. In particular, we focus on learning possibilities for a family of regularization networks and radial basis function networks (RBF networks). A framework built on top of the basic algorithm derived from the theory is designed; it includes estimation of the regularization parameter and of the kernel function by min...
Generalization Performance of Regularization Networks and Support Vector Machines via Entropy Numbers of Compact Operators (Produced as Part of the ESPRIT Working Group in Neural and Computational Learning II, NeuroCOLT2 27150)
We derive new bounds for the generalization error of kernel machines, such as support vector machines and related regularization networks, by obtaining new bounds on their covering numbers. The proofs make use of a viewpoint that is apparently novel in the field of statistical learning theory. The hypothesis class is described in terms of a linear operator mapping from a possibly infinite dimension...
Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
We derive new bounds for the generalization error of kernel machines, such as support vector machines and related regularization networks by obtaining new bounds on their covering numbers. The proofs make use of a viewpoint that is apparently novel in the field of statistical learning theory. The hypothesis class is described in terms of a linear operator mapping from a possibly infinite-dimens...
Regularization for Multiple Kernel Learning via Sum-Product Networks
In this paper, we are interested in constructing general graph-based regularizers for multiple kernel learning (MKL) given a structure which is used to describe the way of combining basis kernels. Such structures are represented by sum-product networks (SPNs) in our method. Accordingly we propose a new convex regularization method for MKL based on a path-dependent kernel weighting function which...
Journal: Journal of Machine Learning Research
Volume: 6, Issue: -
Pages: -
Publication year: 2005